
    Finite-State Dimension and Lossy Decompressors

    This paper examines information-theoretic questions regarding the difficulty of compressing data versus the difficulty of decompressing data, and the role that information loss plays in this interaction. Finite-state compression and decompression are shown to be of equivalent difficulty, even when the decompressors are allowed to be lossy. Inspired by Kolmogorov complexity, this paper defines the optimal *decompression ratio* achievable on an infinite sequence by finite-state decompressors (that is, finite-state transducers outputting the sequence in question). It is shown that the optimal compression ratio achievable on a sequence S by any *information-lossless* finite-state compressor, known as the finite-state dimension of S, equals the optimal decompression ratio achievable on S by any finite-state decompressor. This result yields a new decompression characterization of finite-state dimension in terms of lossy finite-state transducers.
    Comment: We found that Theorem 3.11, which was the main motivation for this paper, was already proven by Sheinwald, Ziv, and Lempel in papers from 1991 and 1995.
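As an illustrative aside (not taken from the paper): finite-state dimension is known to coincide with the limiting normalized block entropy of a sequence, which suggests a simple empirical estimator on finite prefixes. The sketch below, with an arbitrary example sequence, approximates this by computing the Shannon entropy rate of disjoint k-blocks of a prefix; the function name and the example sequence are our own choices.

```python
from collections import Counter
from math import log2

def block_entropy_rate(s, k):
    """Shannon entropy of the disjoint k-block distribution of s,
    normalized to bits per symbol (so the value lies in [0, 1]
    for a binary sequence)."""
    blocks = [s[i:i + k] for i in range(0, len(s) - k + 1, k)]
    n = len(blocks)
    counts = Counter(blocks)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    return h / k

# A binary Champernowne-style sequence (concatenated binary
# representations of 1, 2, 3, ...) is Borel normal, so the
# estimate should be close to 1 for modest block lengths.
seq = "".join(format(i, "b") for i in range(1, 2000))
print(block_entropy_rate(seq, 4))
```

For longer prefixes and larger k the estimate tightens; the finite-state dimension itself is the limit of these rates as the block length grows.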

    Depth, Highness and DNR degrees

    We study Bennett deep sequences in the context of recursion theory; in particular we investigate the notions of O(1)-deepK, O(1)-deepC, order-deepK and order-deepC sequences. Our main results are that Martin-Löf random sets are not order-deepC, that every many-one degree contains a set which is not O(1)-deepC, that O(1)-deepC sets and order-deepK sets have high or DNR Turing degree, and that no K-trivial set is O(1)-deepK.
    Comment: journal version, dmtc

    Dimensions of Copeland-Erdős Sequences

    The base-k Copeland-Erdős sequence given by an infinite set A of positive integers is the infinite sequence CE_k(A) formed by concatenating the base-k representations of the elements of A in numerical order. This paper concerns four quantities: the finite-state dimension dim_FS(CE_k(A)), a finite-state version of classical Hausdorff dimension introduced in 2001; the finite-state strong dimension Dim_FS(CE_k(A)), a finite-state version of classical packing dimension introduced in 2004 and a dual of dim_FS(CE_k(A)) satisfying Dim_FS(CE_k(A)) >= dim_FS(CE_k(A)); the zeta-dimension Dim_ζ(A), a kind of discrete fractal dimension discovered many times over the past few decades; and the lower zeta-dimension dim_ζ(A), a dual of Dim_ζ(A) satisfying dim_ζ(A) <= Dim_ζ(A). We prove that dim_FS(CE_k(A)) >= dim_ζ(A), which extends the 1946 proof by Copeland and Erdős that the sequence CE_k(PRIMES) is Borel normal, and that Dim_FS(CE_k(A)) >= Dim_ζ(A). These bounds are tight in the strong sense that the four quantities can simultaneously take any four values in [0,1] satisfying the four above-mentioned inequalities.
    Comment: 19 pages
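To make the construction concrete (an illustrative sketch, not code from the paper): a prefix of CE_k(A) is easy to generate by concatenating base-k representations in numerical order. The helper names below are our own; the example applies the construction to the primes, the case treated by Copeland and Erdős in 1946.

```python
from itertools import count

def ce_sequence(elements, k, length):
    """Prefix of the base-k Copeland-Erdos sequence for an
    ascending iterable of positive integers (supports k <= 20)."""
    out = []
    for n in elements:
        digits = []
        while n:
            digits.append("0123456789abcdefghij"[n % k])
            n //= k
        out.extend(reversed(digits))  # base-k representation, most significant first
        if len(out) >= length:
            break
    return "".join(out[:length])

def primes():
    """Simple trial-division prime generator."""
    found = []
    for n in count(2):
        if all(n % p for p in found if p * p <= n):
            found.append(n)
            yield n

# First 5000 digits of the base-10 Copeland-Erdos constant:
# 2, 3, 5, 7, 11, 13, ... concatenated.
prefix = ce_sequence(primes(), 10, 5000)
freqs = {d: prefix.count(d) / len(prefix) for d in "0123456789"}
```

Borel normality of CE_10(PRIMES) means every digit (and every digit block) appears with its fair asymptotic frequency, although convergence is slow on short prefixes.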

    Martingale families and dimension in P

    We introduce a new measure notion on small complexity classes (called F-measure), based on martingale families, that avoids some drawbacks of previous measure notions: it can be used to define dimension, because martingale families can make money on all strings, and it yields random sequences with an equal frequency of 0's and 1's. On larger complexity classes (E and above), F-measure is equivalent to Lutz resource-bounded measure. As an application of F-measure, we answer a question raised in [E. Allender, M. Strauss, Measure on small complexity classes, with application for BPP, in: Proc. of the 35th Ann. IEEE Symp. on Found. of Comp. Sci., 1994, pp. 807–818] by improving their result to: for almost every language A decidable in subexponential time, P^A = BPP^A. We show that almost all languages in PSPACE do not have small non-uniform complexity. We compare F-measure to previous notions and prove that martingale families are strictly stronger than Γ-measure [Allender and Strauss, op. cit.]; we also discuss the limitations of martingale families concerning finite unions. We observe that all classes closed under polynomial many-one reductions have measure zero in EXP iff they have measure zero in SUBEXP. We use martingale families to introduce a natural generalization of Lutz resource-bounded dimension [J.H. Lutz, Dimension in complexity classes, in: Proceedings of the 15th Annual IEEE Conference on Computational Complexity, 2000, pp. 158–169] on P, which meets the intuition behind Lutz's notion. We show that P-dimension lies between finite-state dimension and dimension on E. We prove an analogue of a theorem of Eggleston in P: the class of languages whose characteristic sequence contains 1's with frequency α has P-dimension equal to the Shannon entropy of α.
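To make the martingale intuition concrete (an illustrative toy, not the F-measure construction of the paper): a martingale d satisfies the fairness condition d(w0) + d(w1) = 2 d(w), and it succeeds on a sequence if its capital is unbounded along the prefixes. The sketch below bets a fixed fraction of capital on whichever bit has been seen more often so far, a strategy that profits on biased sequences.

```python
def run_martingale(seq, bet_fraction=0.2):
    """Bet bet_fraction of current capital on the bit seen more
    often so far.  Payoff is fair: a correct guess gains the stake,
    an incorrect one loses it, so the two possible successor
    capitals average to the current capital (the martingale condition)."""
    capital = 1.0
    ones = zeros = 0
    for bit in seq:
        guess = 1 if ones >= zeros else 0
        stake = bet_fraction * capital
        capital += stake if bit == guess else -stake
        ones += bit
        zeros += 1 - bit
    return capital

# On a sequence with 75% ones the strategy's capital grows
# exponentially; on a sequence it cannot predict it does not.
biased = ([1] * 3 + [0]) * 50
print(run_martingale(biased))
```

Resource-bounded measure and dimension quantify exactly this: a class is small when a single feasible martingale (or family, in the F-measure setting) succeeds on every language in it.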

    A Computational Theory of Subjective Probability

    In this article we demonstrate how algorithmic probability theory applies to situations that involve uncertainty. When people are unsure of their model of reality, the outcome they observe will cause them to update their beliefs. We argue that classical probability cannot be applied in such cases, and that subjective probability must instead be used. In Experiment 1 we show that, when judging the probability of lottery number sequences, people apply subjective rather than classical probability. In Experiment 2 we examine the conjunction fallacy and demonstrate that the materials used by Tversky and Kahneman (1983) involve model uncertainty. We then provide a formal mathematical proof that, for every uncertain model, there exists a conjunction of outcomes which is more subjectively probable than either of its constituents in isolation.
    Comment: Maguire, P., Moser, P., Maguire, R. & Keane, M.T. (2013). "A computational theory of subjective probability." In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 960-965). Austin, TX: Cognitive Science Society.

    Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory

    In this article we review Tononi's (2008) theory of consciousness as integrated information. We argue that previous formalizations of integrated information (e.g. Griffith, 2014) depend on information loss. Since lossy integration would necessitate continuous damage to existing memories, we propose that it is more natural to frame consciousness as a lossless integrative process, and we provide a formalization of this idea using algorithmic information theory. We prove that complete lossless integration requires noncomputable functions. This result implies that if unitary consciousness exists, it cannot be modelled computationally.
    Comment: Maguire, P., Moser, P., Maguire, R. & Griffith, V. (2014). Is consciousness computable? Quantifying integrated information using algorithmic information theory. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
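As a loose empirical aside (not the paper's formalism): Kolmogorov complexity is noncomputable, but a general-purpose compressor gives a crude computable upper bound, and the gap c(X) + c(Y) - c(XY) is a standard stand-in for the information two parts share. The sketch below uses zlib in this role; the function and variable names are our own.

```python
import random
import zlib

def c(data: bytes) -> int:
    """Crude computable upper bound on Kolmogorov complexity:
    the zlib-compressed size in bytes."""
    return len(zlib.compress(data, 9))

rng = random.Random(0)
blob = bytes(rng.randrange(256) for _ in range(2000))

# Fully shared information: the second part duplicates the first,
# so compressing them together saves almost an entire part.
shared = c(blob) + c(blob) - c(blob + blob)

# Independent information: a fresh pseudo-random part shares
# nothing with blob, so joint compression saves essentially nothing.
other = bytes(random.Random(1).randrange(256) for _ in range(2000))
independent = c(blob) + c(other) - c(blob + other)

print(shared, independent)
```

The paper's point is precisely that such compressor-based scores are only approximations: exact lossless integration, defined via true Kolmogorov complexity, cannot be computed by any algorithm.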